Student Searching Behavior and the Web: Use of Academic Resources and Google
Authors

Jillian R. Griffiths and Peter Brophy
Abstract
This article reports results of two user studies of search engine use conducted to evaluate the United Kingdom’s national academic sector digital information services and projects. The results presented here focus on student searching behavior and show that commercial Internet search engines dominate students’ information-seeking strategy. Forty-five percent of students use Google as their first port of call when locating information, with the university library catalogue used by 10 percent of the sample. Results of students’ perceptions of ease of use, success, time taken to search, and reasons for stopping a search are also presented.

Jillian R. Griffiths, Centre for Research in Library and Information Management (CERLIM), Manchester Metropolitan University, Faculty of Humanities, Law and Social Science, Geoffrey Manton Building, Rosamond Street West, Manchester, M15 6LL, United Kingdom; and Peter Brophy, Director, Centre for Research in Library and Information Management (CERLIM), Manchester Metropolitan University, Faculty of Humanities, Law and Social Science, Geoffrey Manton Building, Rosamond Street West, Manchester, M15 6LL, United Kingdom.

As part of its commitment to developing the use of electronic resources and infrastructures, including the Internet, as an educational resource, the United Kingdom has expended considerable funds to facilitate the convergence of new learning environments with digital library services and to develop a coherent Information Environment (IE) to support higher education (Ingram & Grout, 2002).1 The resulting IE is both an enabling infrastructure, designed to facilitate the interoperability of heterogeneous services, and an impressive collection of online resources. While it continues to expand in size, scope, and complexity, formative evaluation has been a key part of the IE. In recent years, a number of government-sponsored projects have sought to investigate and profile the way students use
electronic information services within higher and further education. This article focuses on student Web searching behavior and reports on some of the related studies conducted at the Centre for Research in Library & Information Management (CERLIM) at the Manchester Metropolitan University and at the Centre for Studies in Advanced Learning Technologies (CSALT) at Lancaster University. The results of these studies are significant not only to the IE but also to other subject portal projects and to online library research in general.

Survey of Existing Search Engine Use Research

We begin our analysis with an examination of recent research on search engine use. First we analyze research on general Internet users, and then we look at the work focusing on student users. Search engine usage is difficult to measure because search engines—and the Internet in general—are not controlled environments, such as a library home page or a specific information database. As such, it has been difficult to apply the traditional model of recall and precision used in evaluating information retrieval (IR) systems to Internet search engines (SEs). A further major limitation to search engine use research is that users are adopting different information-seeking strategies than those used in more traditional contexts (Ford, Wilson, Foster, Ellis, & Spink, 2002; Jansen, Spink, & Saracevic, 2000). Jansen also points out that the behavior of Web searchers follows the principle of least effort (Zipf, 1949). This has also been recorded by Marchionini (1992), who stated that “humans will seek the path of least cognitive resistance” (p. 156), and Griffiths (1996), who found that “increasing the cognitive burden placed on the user . . . can affect successful retrieval of information. Where an application required fewer actions from the user, greater success was achieved as there was less possibility for a user to make an error” (p. 203).
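The recall and precision measures mentioned above are simple to state but, as the text notes, hard to apply to the open Web because they presuppose a complete set of judged-relevant documents. A minimal sketch (the document IDs and relevance judgments are invented for illustration) makes that dependency explicit:

```python
def precision_recall(retrieved, relevant):
    """Classic IR measures: precision is the fraction of retrieved items
    that are relevant; recall is the fraction of relevant items that were
    retrieved. Both require a complete judged-relevant set, which an
    uncontrolled Web corpus cannot supply."""
    retrieved, relevant = set(retrieved), set(relevant)
    hits = retrieved & relevant
    precision = len(hits) / len(retrieved) if retrieved else 0.0
    recall = len(hits) / len(relevant) if relevant else 0.0
    return precision, recall

# Hypothetical result list and judged-relevant set for one query
p, r = precision_recall(["d1", "d2", "d3", "d4"], ["d2", "d4", "d7"])
# p = 2/4 = 0.5, r = 2/3
```

The recall denominator is the point of difficulty: for a Web search engine, no one can enumerate every relevant page, so the measure cannot be computed as defined.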
An informative review of Web searching studies by Jansen and Pooch (2001) compares the searching characteristics of Web information seekers with those of users of traditional IR systems, but their study separates out Online Public Access Catalogue (OPAC) users from general IR system users. So, for example, they found that OPAC searchers express their information needs in queries of one to two terms, while Web searchers use approximately two terms and IR searchers six to nine terms per query. Searching session length also differed, with Web searchers usually using two queries per session and typically viewing no more than ten documents from the results list, OPAC searchers using two to five queries and viewing fewer than fifty documents, and IR searchers using seven to sixteen queries and viewing ten documents per session. In addition, while 37 percent of IR searchers use Boolean operators, only 8 percent of Web searchers and 1 percent of OPAC searchers use more advanced searches.

Other observations of the average Web searcher (Spink, Wilson, Ellis, & Ford, 1998; Ellis, Ford, & Furner, 1998) point out that ineffective use may be caused by a lack of understanding of how a search engine interprets a query. Few users are aware of whether or not a search service defaults to “and” or “or” and expect a search engine to automatically discriminate between single terms and phrases. Also, devices such as relevance feedback work well if the user ranks ten or more items, when in reality users will only rank one or two items for feedback (Croft, 1995). Koll (1993) found that users provide few clues as to what they want, approaching a search with an attitude of “I’ll know it when I see it,” which creates difficulties in formulation of a query statement. Larsen (1997) is of the opinion that Internet search systems will evolve to meet the behavior of the average Web searcher.
Thus it can be seen that there has been a shift toward the introduction of search features that appear to respond to the ways in which users actually search these systems, for example, search assistance, query formulation, query modification, and navigation. The notion that improved interaction may be key to improving results is attractive in principle but not necessarily true in reality. Nick Lethaby of Verity Incorporated, paraphrased in Andrews (1996), pointed out that users do not want to interact with a system beyond entering a few keywords. A separate research project, conducted to develop a methodology for the evaluation of Internet search engines from a user’s perspective (DEvISE—Dimensions in Evaluation of Internet Search Engines), also found that interaction was little valued by users: the Interaction dimension had the weakest correlation with users’ overall rating of satisfaction, while Efficiency had the strongest correlation, followed by Effectiveness and then Utility (Johnson, Griffiths, & Hartley, 2001, 2003). It can thus be assumed that most users will not use advanced search features, nor enter complex queries, nor want to interact with search systems. As a consequence, systems such as search engines are now trying to automate query formulation, shifting the burden of formulating precise or extensive terminology from the user to the system.

Student Studies

Beyond general studies of search engine users, a number of studies have focused on the student population. Cmor and Lippold (2001) put forward a number of observations from their experiences of student searching behavior on the Web.
These findings can be summarized as follows: (1) students use the Web for everything; (2) they may spend hours searching or just a few minutes; (3) searching skills vary, and students will often assess themselves as being more skilled than they actually are; and (4) they will give discussion list comments the same academic weight as peer-reviewed journal articles.

Navarro-Prieto, Scaife, and Rogers (1999) sought to develop an empirically based model of Web searching in which twenty-three students were recruited from the School of Cognitive and Computer Science at the University of Sussex. Ten of these participants were computer science students and thirteen were psychology students. Their findings highlight a number of interesting points: (1) while the computer science students are more likely to be able to describe how search engines develop their databases, neither of the two groups has a clear idea of how search engines use the queries to search for information; (2) most participants considered their levels of satisfaction with the results of their search to be “good” or “OK”; and (3) most participants cannot remember their searches and tend to forget those search engines and queries that did not give any successful results. From their research Navarro-Prieto, Scaife, and Rogers (1999) were able to identify three different general patterns of searching:

1. Top-down strategy, where participants searched in a general area and then narrowed down their search from the links provided until they found what they were looking for.
2. Bottom-up strategy, where participants looked for a specific keyword provided in their instructions and then scrolled through the results until they found the desired information. This strategy was most often used by experienced searchers.
3. Mixed strategies, where participants used both of the above in parallel. However, this last approach was only used by experienced participants.
Twidale, Nichols, Smith, and Trevor (1995), in a study that informed the development of the online journal on digital archiving, Ariadne, considered the role of collaborative learning during information searching. Quoting relevant literature, they identified the common searching problems as retrieving zero hits; retrieving hundreds of hits; frequent errors; little strategy variation; and locating few of the relevant records. The only specific searching issue addressed was that of “errors made in searching,” which described how simple typing errors in a sound strategy led to few hits and subsequently led to the strategy being abandoned. More general observations revealed a number of collaborative interactions between students, noted as the following: (1) students will often work in groups (containing two to four individuals) around a single workstation, discussing ideas and planning their next actions; (2) groups work on adjacent workstations, discussing what they are doing, comparing results, and sometimes seeming to compete to find the information; (3) individuals work on adjacent workstations, occasionally leaning over to ask their neighbor for help; and (4) individuals work at separate workstations, monitoring the activity of others.

Finally, a large-scale, UK-funded study, called the User Behaviour Monitoring and Evaluation Framework, was designed to investigate and profile the use of electronic information services by students within higher and further education in the UK. The framework specifically focuses on the development of a longitudinal profile of the use of electronic information services (EIS) and the development of an understanding of the triggers of and barriers to use (Banwell et al., 2004). Within this framework, two different research projects (now completed) were created to evaluate service usage trends.
The JUSTEIS project (JISC Usage Survey Trends: Trends in Electronic Information Service) surveyed trends in electronic information service usage; the JUBILEE project (JISC User Behaviour in Information Seeking: Longitudinal Evaluation of Electronic Information Services) undertook a longitudinal study of electronic information service use. JUBILEE and JUSTEIS found that undergraduate students mainly use electronic information systems for academic purposes connected to assessment, although some leisure use was reported, and use of search engines predominated over all other types of electronic information systems. Postgraduate students undertaking a degree by research were observed to have a different pattern of use from that of postgraduate students on a taught course, and overall some of the postgraduate students used JISC-negotiated services and specialist electronic information systems more than undergraduates did. Use of electronic journals by both academic staff and postgraduate students was relatively infrequent. Patterns of use of electronic information systems varied among subject disciplines, and academic staff were found to exert a greater influence over undergraduate and postgraduate use of electronic information systems than library staff did. In addition, friends, colleagues, and fellow students were also influential. Different models of information skills provision and support were found in the different institutions and disciplines participating in these studies. Banwell et al. (2004) suggest that patterns of use of electronic information systems become habitual.

The EDNER and EDNER+ Studies

The search engine usage project we have been involved with since 2000 is the Evaluation of the Distributed National Electronic Resource (EDNER) Project.2 Following its successful completion in 2003, the project was awarded a one-year extension until July 2004, hence the additional title, EDNER+.
The aim of the EDNER studies was to develop understanding of users’ searching behavior within the IE by asking them to assess a selection of IE services according to a range of defined criteria—Quality Attributes. Given the limitations of search engine research and the shift in recent years from the usage of performance indicators to measures of outcome and impact within libraries (Brophy, 2004), we have developed a Quality Attributes approach for this research.

The classic definition of quality as “fitness for a purpose” was developed by Garvin (1987) into an eight-dimension, or attribute, model, which can be used as a framework for determining the overall quality of a product or service. This approach has since been adapted for use in libraries and information services by Marchand (1990), Brophy and Coulling (1996), Brophy (1998), and Griffiths and Brophy (2002). Griffiths and Brophy adapted the Quality Attributes by changing the emphasis of one attribute, changing the concept of another, and introducing two additional attributes (Currency and Usability), thus producing a set of ten attributes: Performance, Conformance, Features, Reliability, Durability, Currency, Serviceability, Aesthetics, Perceived Quality, and Usability. A further discussion and presentation of results related to individual attributes is given by Griffiths (2003). The work reported here focuses on results related to discovery and location of resources, resource use, and students’ perceptions of quality.

For the first EDNER study, test searches were designed (one for each of the fifteen services to be used by the participants). These searches were of sufficient complexity to challenge the user without being impossible to answer and were individually tailored for each of the services evaluated.
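The ten Quality Attributes lend themselves to a simple per-service scoring record. The sketch below is purely illustrative: the 1-to-5 scale, the equal weighting of attributes, and the sample ratings are our assumptions and are not taken from the EDNER instruments.

```python
ATTRIBUTES = [
    "Performance", "Conformance", "Features", "Reliability", "Durability",
    "Currency", "Serviceability", "Aesthetics", "Perceived Quality", "Usability",
]

def overall_score(ratings):
    """Aggregate one service's attribute ratings (assumed 1-5 each) into a
    simple unweighted mean; refuse incomplete questionnaires."""
    missing = [a for a in ATTRIBUTES if a not in ratings]
    if missing:
        raise ValueError(f"unrated attributes: {missing}")
    return sum(ratings[a] for a in ATTRIBUTES) / len(ATTRIBUTES)

# Invented ratings for a single service
ratings = dict.fromkeys(ATTRIBUTES, 4)
ratings["Usability"] = 5
print(overall_score(ratings))  # 4.1
```

In practice a study would weight the attributes by their observed correlation with overall satisfaction rather than averaging them equally; the equal-weight mean here is only the simplest possible aggregation.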
Participants were recruited via Manchester Metropolitan University’s Student Union Job Shop; twenty-seven students from a wide course range took part in the study, and each student was paid for his or her participation. One-third of the sample consisted of students from the Department of Information and Communications who were studying for an information and library management degree, while the remaining two-thirds were studying a wide variety of subjects (being at various stages of their studies). No restrictions were placed on their computer experience, Internet experience, or familiarity with search engines. Testing was conducted in a controlled environment based within the Department of Information and Communications. Each participant searched for the fifteen test queries and completed questionnaires for each task undertaken.

The EDNER+ study investigated student use of eighteen services, which were selected from the presentation layer of the IE. Follow-up questions related to the first EDNER study were included. Individual tasks were created for each service, questionnaires were developed and piloted, and methods of analysis were agreed upon. Thirty-eight students were recruited from thirty-four subject areas across the university. These students then undertook two days of searching. None of these participants was studying for an information and library management degree. Each participant used all eighteen services and provided feedback on each service via individual questionnaires. Subjects studied included art, sociology, Spanish, primary education, English, law, and computing.

Data gathered during both studies were analyzed in two ways: quantitative data were analyzed using SPSS (Statistical Package for the Social Sciences), and open-response question data were analyzed using qualitative techniques.
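The quantitative analysis was carried out in SPSS, and the studies do not publish their scripts. Purely as an illustration of the kind of frequency-and-percentage tabulation such questionnaire data involve, the same descriptive step can be sketched in Python (the response values below are invented, not the study's data):

```python
from collections import Counter

def frequency_table(responses):
    """Tabulate categorical questionnaire answers as (count, percent of
    sample) pairs, most frequent first, with percentages rounded to the
    nearest whole percent."""
    counts = Counter(responses)
    total = len(responses)
    return {cat: (n, round(100 * n / total)) for cat, n in counts.most_common()}

# Invented "first port of call" answers from a small sample of 15 students
answers = ["Google"] * 9 + ["OPAC"] * 2 + ["Yahoo"] * 2 + ["Lycos", "AltaVista"]
print(frequency_table(answers))
```

Each percentage is computed against the whole sample, which is how the first-port-of-call figures reported in the Results section should be read.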
Results

The EDNER studies were concerned with two main questions: (1) how do students discover and locate information, and (2) how do services (and aspects of services) rate in a student evaluation, and what criteria are most important to them (results of this work are presented in Griffiths, 2003). The following section presents a selection of the results related to discovery and location of information.

Students’ Use of Search Engines Dominates Their Information-Seeking Strategy

Students were asked to find information on fifteen set tasks, designed to be typical of information seeking in an academic environment, and to complete a questionnaire after each task. Every time they started a new task we asked them where they went first to try to find relevant information. The following were the most frequently cited starting points in the first EDNER study:

• 45 percent of students used Google as their first port of call when locating information
• The second most highly used starting point was the university OPAC, used by 10 percent of the sample
• Next comes Yahoo, used by 9 percent of the students as the first source they tried
• Lycos was used first by 6 percent
• AltaVista, Ask Jeeves, and BUBL were each used as a first resource by 4 percent of the sample

Results from the EDNER+ study found that

• 22 out of 38 participants use an SE every day
• 2 use an SE three to six times a week
• 9 use an SE once or twice a week
• 2 use an SE every other week
• 3 use an SE once or twice a month

Of the search engines chosen, 23 used Google, 4 used a combination of Google and Yahoo, 3 used Yahoo, and 5 used a combination of a variety of SEs. Some students exhibited confusion regarding services, listing the library catalogue and the BBC as search engines they had used. It is clear that the majority of participants use a search engine in the first instance.
This concurs with the JUBILEE and JUSTEIS results, which found that use of SEs predominates over all other types of EIS. Search engines are liked for their familiarity and because they have provided successful results on previous occasions. Individual search engines were frequently described by students as “my personal favourite,” and phrases such as “tried and tested,” “my usual search engine,” and “trusted” were frequently given by the students when asked why they chose this source first. Google’s popularity was also expressed in many comments about the service, such as: “Google is very straight forward. You put in your word and it searches. It also corrects spellings to rectify your search. Bright, eyecatching—simple. Not confusing”; “Most popular search engine. I always use this for any search”; and “I find the site very helpful. It seems to have whatever I want. I’m happy with it. It is simple but complete.”

Students’ Use of Academic Resources Is Low

After search engines, the most frequent starting point was the university library OPAC, followed to a lesser degree by a known academic resource. Thus BUBL, Emerald, Ingenta, and BIDS were all mentioned by participants. There was a very marked difference between information and library management students and those studying for other degrees. The former group was much more likely to prefer these academic resources as a first search tool, for reasons similar to those of the search engine users. Comments such as “Quick and easy to find,” “used to it,” “thought it would have the relevant information,” and “I always use the University electronic journal search first” were typical among these students. Again, ease of use, familiarity, and reliability were key factors in their choice. Information and library management students used the library OPAC to provide details of, and access to, journals. As might be expected, they knew that they would find such information there.
They expected to use “bibliographic databases across different subject disciplines,” and they also more frequently sought out access to sites with “academic information as opposed to commercial.” Some displayed quite a detailed knowledge of the resources available through the library Web site. One student searching for a parenting article “assumed PsychInfo would have an abstract of the article and you can search by either author or keyword”; another, wanting a source on using questionnaires to collect data, “thought there might be something on methodology in the statistics section.” The information management students also used the library home page to find a route to subject categories. For example, one user seeking information on wildlife tours looked for an “organisation on safaris” via the Tourism link; another chose the Biology link as a possible, though unsuccessful, route to an image of the brain. A third “thought the library home page would have a section for science in general” (it does not) from which to look for a link to the NASA Web site, and then tried the Web of Science before resorting to Google. This group of students also made more frequent use of services such as BUBL, Emerald, Ingenta, and BIDS. One described BUBL as a “known academic resource with selected/quality sites of interest to academic disciplines” and used it to answer a question on early dynastic Egypt.
In contrast, another user made the point that, when searching for an article, it was easier to try Google first—“quite good at finding articles”—because otherwise there would be a need to “look at a few different databases, e.g., Emerald, BUBL etc.” Only two students mentioned using the Resource Discovery Network (RDN) (although in no instance was it the first action taken): one seeking information on research methodology, and another as a possible route to an image of the brain, though this was unsuccessful and the student resorted to Yahoo.

The library OPAC was also used by non–information management students to locate information, though to a much lesser degree, and always when looking for journals or articles. “I thought the library pages listed all articles” and “Thought it (an article) was most likely to be in the library catalogue” were two reasons given. One student used the library OPAC because “I knew that the University holds a large source of electronic journals.” However, this action was taken only when a search engine search had failed. Another user searching for an article on “parenting” resorted to the library OPAC because part of it “is medically based so I thought it would be the best place to look.” Other comments indicate some confusion amongst students about the OPAC, describing it as “A search engine for the library, to find books and catalogues” and “With this search engine . . . it is easy and straight forward to use.” It seems that students’ use of resources is now very colored by their experience with search engines, which in turn may lead to expectations that may not be realistic for different types of services. Among all users, the library OPAC was chosen for its familiarity, its ease of use, its ability to retrieve relevant information, and mostly because there was a clear expectation among some participants that certain types of information resources would be found there.
The fact that the most frequent users were information management students might suggest that, when lecturers are aware of and train their students to use the resources that the library provides, their students will become familiar with them and will use them. If this is not done, the status quo approach seems to be resorting to a search engine, with varying degrees of success.

Levels of use of the library OPAC recorded by the EDNER+ study showed that

• 4 out of 38 participants had never used the library OPAC
• 4 only use it occasionally
• 10 use it once or twice a month
• 3 use it every other week
• 10 use it once or twice a week
• 1 uses it three to six times a week and
• 5 use it every day

One participant failed to report his/her level of use.

Bibliographic database use was recorded as follows:

• 21 out of 38 participants never use bibliographic databases
• 3 use them occasionally
• 6 use them once or twice a month
• 3 use them every other week
• 4 use them once or twice a week and
• 1 student reported that he/she uses them three to six times a week

Of the students who do use bibliographic databases, 3 stated that they use Web of Science, 3 stated that they use Emerald, and 2 listed FAME. All other bibliographic databases were listed by only one participant each: these included SOSIG, Ingenta, Butterworths, Lexis Nexis, and Questia Social Science Library. Use of Amazon.com for locating information, especially about videos, proved to be popular.
Four users looking for the Manchester distributor of an Albert Einstein video went immediately to Amazon to seek this information because “Amazon is a global source for videos” that “sometimes has distribution details and other possible names for the video.”

Perceptions of Use, Success, and Why Students Stop Searching

When participants were asked how easy it was to locate information, the following responses were recorded:

• 50 percent found it easy to locate the required information
• 35 percent found it difficult
• 15 percent had no view either way

Participants’ reasons for finding tasks easy included: “Easy enough to find using the search engines”; “Easier to find formal institutions because they usually have a Web site and these are more often than not listed as recommended sites on the library home page via corresponding subject pages”; “Very easy and direct search taking a small amount of time”; and “It was easy once I went back to Google. Ingenta just messed me about.” Where participants found a task difficult, the following comments were made: “Why doesn’t someone make a good search engine devoted to articles? It’s hard to find an article without an author”; “It is very difficult to search for something specific”; “It was easy to find an abstract, I just couldn’t find the full article”; and “Got disheartened.”

When participants were asked to locate a Web site to find specific information, 70 percent responded that they were successful and 30 percent that they were unsuccessful. When asked to find information via a specific service, 74 percent responded that they were successful and 17 percent that they were unsuccessful (9 percent did not know). Check questions were included to ensure that participants were not overgenerous in their reports of success. From these results it is clear that, even when users can find information, it is not always an easy task.
This may have serious implications for developers of services, as a number of studies (Griffiths, 1996; Johnson, Griffiths, & Hartley, 2001) have shown that users will often trade performance for the path of least cognitive resistance (minimum effort and time). Students were asked to search for as long (or short) a time as they wanted, provided that they spent no longer than 30 minutes on any one service. This upper limit was imposed as a result of other research (Craven & Griffiths, 2002), which found that the average time taken to search for information is between 15 and 19 minutes. The majority of students in this study spent an average of between 1 and 15 minutes searching for information. The DEvISE project (Johnson, Griffiths, & Hartley, 2001) also found that Efficiency correlated most strongly with General Satisfaction, with Effectiveness second, which may suggest that the amount of time and effort required from the user matters more than the relevance of the items found.

Students were also asked why they stopped trying to locate information, with the following reasons given:

• Found information = 70 percent
• Unable to find Web site within time allowed = 15 percent
• Could not find a Web site and gave up = 12 percent
• Technical problems affected search = 3 percent

Participants who were unable to find a Web site within the time allowed usually stated that they had run out of time. Among those who “Could not find a Web site and gave up,” frustration at being unable to complete the task was expressed. “It is frustrating when you can’t find what you are looking for” and “frustration; all sites were irrelevant” were typical remarks.
The lack of success was described as “hitting a brick wall” or not “getting anywhere.” Some admitted that they simply did not have any further search strategies, saying they “Don’t know where else to search for it,” “I have searched everywhere I can think of,” or “didn’t know where else to go.” This frustration was also reflected in some of the comments of those who encountered “Technical problems.” These problems were usually expressed as “slowness.” “Internet was very slow” was the most usual comment. “Taking ages to get to some sites,” “Server could not contact host and very slow for pages to show,” or “Pages would not open” were other complaints. One respondent remarked that he/she “decided to stop, as if I was doing a search for myself I would not have spent that much time.” It may be frustrating for the developers of resources to accept that speed of access may be a criterion on which users will evaluate a service, but studies have shown that this is an important indicator for some users (Johnson, Griffiths, & Hartley, 2001). One respondent gave a very simple reason for stopping—“Teatime!”

Student Perceptions of Quality

One of the main aims of the IE is to provide a managed quality resource for staff and students in higher and further education. During discussions with various stakeholders involved with the development of the IE, it became clear that common definitions of what is meant by quality electronic resources could not be assumed. Therefore, participants were asked during testing to indicate what quality meant to them in terms of information available via electronic services (they were not asked to relate their responses to any one particular service). Four criteria were presented to them with which they could either agree or disagree. Participants were also asked to add any additional criteria that were not listed but were important to them. Table 1 presents their responses.

Table 1. Students’ Responses to Definitions of Quality

Criteria: Reliable | Current | Accurate | Refereed
Similar resources
The Impact of the Objective Complexity and Product of Work Task on Interactive Information Searching Behavior
Background and Aim: This study aimed to explore the impact of objective complexity and product of work task on users' interactive information searching behavior. Method: The research population consisted of MSc students of Ferdowsi University of Mashhad enrolled in the 2012-13 academic year. In 3 stages of sampling (random stratified, quota, and voluntary sampling), 30 cases were selected. Each of ...
Full text

Behavioral Considerations in Developing Web Information Systems: User-centered Design Agenda
The current paper explores designing a web information retrieval system regarding the searching behavior of users in real and everyday life. Designing an information system that is closely linked to human behavior is equally important for providers and the end users. From an Information Science point of view, four approaches in designing information retrieval systems were identified as system-...
Full text

Academic procrastination and its characteristics: A Narrative Review
Background: Time management, especially at university, is an important factor contributing to the academic success of students. However, the majority of students relentlessly delay their academic work. Drawing upon the literature, we attempted to identify the characteristics of academic procrastination. Methods: For the review, articles, books and theses in English and Persian were identified with a...
Full text

Factors related to academic failure in preclinical medical education: A systematic review
Introduction: Identifying the learners’ problems early enough and providing advice from the beginning is definitely an important investment in the training and progress of future practitioners. The current review aimed at examining factors related to academic failure of the preclinical medical students. Methods: The study was carried out as a systematic search of publications in the following databas...
Full text

A rapid method to increase transparency and efficiency in web-based searches
Background: Many online search facilities allow searching for academic literature. The majority are bibliographic databases that catalogue published research in an iterative, semi-automated manner, e.g. Web of Science Core Collections, which indexes articles published in selected journals. Other resources, such as Google Scholar, identify academic articles by using search engines that crawl the...
Full text

Factors Affecting the Unplanned Behavior of Users to Use Academic Libraries Resources and Services
Background and Aim: The present study aimed at investigating the factors affecting the unplanned behavior of users to use academic library resources and services. Methods: The present study is considered an applied one in terms of purpose, and it has been conducted using a descriptive-survey method. The research population consists of the students of central libraries at Ferdowsi Universi...
متن کاملذخیره در منابع من
با ذخیره ی این منبع در منابع من، دسترسی به آن را برای استفاده های بعدی آسان تر کنید
Journal: Library Trends
Volume: 53, Issue: -
Pages: -
Publication year: 2005